In recent years, neural image compression (NIC) algorithms have shown powerful coding performance. However, most of them are not adaptive to the image content. Although several content-adaptive methods have been proposed that update encoder-side components, the adaptability of both the latents and the decoder remains under-exploited. In this work, we propose a new NIC framework that improves content adaptability for both the latents and the decoder. Specifically, to remove redundancy in the latents, our content adaptive channel dropping (CACD) method automatically selects the optimal quality levels for the latents spatially and drops the redundant channels. Additionally, we propose the content adaptive feature transformation (CAFT) method to improve decoder-side content adaptability by extracting the characteristic information of the image content, which is then used to transform the features on the decoder side. Experimental results demonstrate that our proposed methods, combined with the encoder-side updating algorithm, achieve state-of-the-art performance.
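To make the channel-dropping idea concrete, here is a minimal PyTorch sketch (not the authors' implementation) that zeroes latent channels per spatial position according to a predicted quality level; the `ChannelDropping` module, its `level_head`, and the keep-fraction rule are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelDropping(nn.Module):
    """Toy sketch of content adaptive channel dropping: a small head
    picks a quality level per spatial position, and channels above
    that level's budget are dropped (zeroed) before entropy coding."""

    def __init__(self, channels: int, num_levels: int = 4):
        super().__init__()
        self.channels = channels
        self.num_levels = num_levels
        # Hypothetical 1x1 head scoring quality levels per position.
        self.level_head = nn.Conv2d(channels, num_levels, kernel_size=1)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: latents of shape (B, C, H, W).
        levels = self.level_head(y).argmax(dim=1, keepdim=True)  # (B,1,H,W)
        # Higher quality level -> more channels kept at that position.
        keep = (levels + 1).float() / self.num_levels * self.channels
        idx = torch.arange(self.channels, device=y.device).view(1, -1, 1, 1)
        return y * (idx < keep).float()  # dropped channels cost no bits

y = torch.randn(1, 192, 16, 16)
print(ChannelDropping(192)(y).shape)  # torch.Size([1, 192, 16, 16])
```

Note that the hard `argmax` is non-differentiable; a trainable version would need a soft or straight-through selection.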
Federated learning (FL) has been recognized as a privacy-preserving distributed machine learning paradigm that enables knowledge sharing among heterogeneous Artificial Intelligence of Things (AIoT) devices through centralized global model aggregation. However, FL suffers from model inaccuracy and slow convergence due to the model heterogeneity of the AIoT devices involved. Although various existing methods try to address the model heterogeneity bottleneck, most of them improve the accuracy of heterogeneous models in a coarse-grained manner, so deployment across large-scale AIoT devices remains a great challenge. To alleviate the negative impact of this problem and take full advantage of the diversity of each heterogeneous model, we propose an efficient framework named HierarchyFL, which uses a small amount of public data for efficient and scalable knowledge sharing across a variety of differently structured models. Using self-distillation and our proposed ensemble library, the hierarchical models can intelligently learn from each other on cloud servers. Experimental results on various well-known datasets show that HierarchyFL not only maximizes knowledge sharing among heterogeneous models in large-scale AIoT systems, but also greatly improves the model performance of each involved heterogeneous AIoT device.
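As a rough illustration of server-side knowledge sharing via self-distillation, the sketch below (an assumption, not HierarchyFL's actual procedure) lets every model distill from the averaged ensemble logits on a public batch; it assumes all models emit logits over the same label set.

```python
import torch
import torch.nn.functional as F

def self_distillation_step(models, public_batch, optimizers, T=2.0):
    """One hypothetical knowledge-sharing step on the cloud server:
    each differently structured model distills from the ensemble
    (averaged logits) of all models on a shared public batch."""
    with torch.no_grad():
        ensemble = torch.stack([m(public_batch) for m in models]).mean(0)
        target = F.softmax(ensemble / T, dim=-1)  # soft ensemble labels
    for model, opt in zip(models, optimizers):
        log_p = F.log_softmax(model(public_batch) / T, dim=-1)
        loss = F.kl_div(log_p, target, reduction="batchmean") * T * T
        opt.zero_grad()
        loss.backward()
        opt.step()
```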
As a promising privacy-preserving machine learning approach, federated learning (FL) enables model training across clients without compromising their confidential local data. However, existing FL methods suffer from low inference performance on non-uniformly distributed data, because most of them rely on Federated Averaging (FedAvg)-based aggregation. By averaging model parameters in a coarse manner, FedAvg eclipses the individual characteristics of the local models, which strongly limits the inference capability of FL. Worse still, in each round of FL training, FedAvg dispatches the same initial local model to all clients, which can easily lead to a stuck-at-local search for the optimal global model. To address the above issues, this paper proposes a novel and effective FL paradigm named FedMR (Federated Model Recombination). Unlike conventional FedAvg-based methods, the cloud server of FedMR shuffles each layer of the collected local models and recombines them into new models for local training on clients. Thanks to the fine-grained model recombination and local training in each FL round, FedMR can quickly find a globally optimal model for all clients. Comprehensive experimental results demonstrate that, compared with state-of-the-art FL methods, FedMR can significantly improve inference accuracy without incurring extra communication overhead.
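The layer-wise recombination step described above can be sketched in a few lines; this is a minimal illustration assuming homogeneous client architectures and PyTorch state dicts, not the paper's exact code.

```python
import random

def recombine(local_states):
    """Sketch of layer-wise model recombination: for every parameter
    name, shuffle which client's layer lands in which new model, then
    dispatch one recombined model per client for local training."""
    n = len(local_states)
    new_states = [{} for _ in range(n)]
    for name in local_states[0]:
        perm = random.sample(range(n), n)  # fresh shuffle per layer
        for i, j in enumerate(perm):
            new_states[i][name] = local_states[j][name].clone()
    return new_states
```

Because each layer is shuffled independently, every dispatched model mixes layers from several clients, which preserves the individual characteristics that plain parameter averaging washes out.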
Improving the runtime performance of deep neural networks (DNNs) is critical because of their wide adoption in real-world applications. Existing approaches to optimizing the tensor algebra expressions of a DNN only consider expressions representable by a fixed set of predefined operators, missing possible optimization opportunities between general expressions. We propose Ollie, the first derivation-based tensor program optimizer. Ollie optimizes tensor programs by leveraging transformations between general tensor algebra expressions, enabling a significantly larger expression search space that includes the search spaces supported by prior work as special cases. Ollie uses a hybrid derivation-based optimizer that effectively combines explorative and guided derivations to quickly discover highly optimized expressions. Evaluation on seven DNNs shows that Ollie outperforms existing optimizers by up to 2.73$\times$ (1.46$\times$ on average) on an A100 GPU and by up to 2.68$\times$ (1.51$\times$ on average) on a V100 GPU.
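To convey what derivation-based optimization means, here is a deliberately tiny toy (made-up rewrite rules and cost model, far simpler than Ollie): candidate expressions are derived by algebraic rewrites and the cheapest form wins.

```python
# Toy derivation: rewrite rules over tensor-algebra expression trees,
# scored by a made-up cost model. Real systems match subexpressions
# recursively and search far larger spaces.
RULES = [
    # (A @ B) + (A @ C)  ->  A @ (B + C): one matmul instead of two.
    (("add", ("matmul", "A", "B"), ("matmul", "A", "C")),
     ("matmul", "A", ("add", "B", "C"))),
]

def cost(expr):
    if isinstance(expr, str):            # a tensor leaf costs nothing
        return 0
    op_cost = {"matmul": 100, "add": 1}[expr[0]]
    return op_cost + sum(cost(arg) for arg in expr[1:])

def derive(expr):
    """All single-rule derivations of expr (top level only, for brevity)."""
    return [rhs for lhs, rhs in RULES if expr == lhs]

expr = ("add", ("matmul", "A", "B"), ("matmul", "A", "C"))
best = min([expr] + derive(expr), key=cost)
print(best, cost(best))  # ('matmul', 'A', ('add', 'B', 'C')) 101
```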
Graph outlier detection is an emerging but crucial machine learning task with numerous applications. Despite the proliferation of algorithms in recent years, the lack of a standard and unified performance-evaluation setting limits their advancement and use in real-world applications. To bridge this gap, we present (to the best of our knowledge) the first comprehensive unsupervised node outlier detection benchmark, UNOD, with the following highlights: (1) evaluating fourteen methods with backbones ranging from classic matrix factorization to the latest graph neural networks; (2) benchmarking method performance on real-world datasets with different types of injected outliers as well as natural outliers; (3) comparing the efficiency and scalability of the algorithms by runtime and GPU memory usage on synthetic graphs of different scales. Based on analyses of the extensive experimental results, we discuss the pros and cons of current methods and point out multiple crucial and promising future research directions.
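A benchmark of this shape boils down to a loop like the following sketch; the `fit`/`decision_function` interface and the detector registry are assumptions for illustration, not UNOD's actual harness.

```python
import time
from sklearn.metrics import roc_auc_score

def benchmark(detectors, datasets):
    """Minimal benchmark loop: fit each detector on each graph, score
    nodes, and record ROC-AUC plus wall-clock runtime. `detectors`
    maps a name to an object exposing fit/decision_function."""
    results = {name: [] for name in detectors}
    for name, det in detectors.items():
        for graph, labels in datasets:   # labels: 1 = outlier node
            start = time.perf_counter()
            det.fit(graph)
            scores = det.decision_function(graph)
            elapsed = time.perf_counter() - start
            results[name].append(
                {"auc": roc_auc_score(labels, scores), "seconds": elapsed})
    return results
```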
Previous deep video compression approaches only use a single-scale motion compensation strategy and rarely adopt the mode prediction techniques from traditional standards such as H.264/H.265 for motion and residual compression. In this work, we first propose a coarse-to-fine (C2F) deep video compression framework for better motion compensation, in which we perform motion estimation, compression, and compensation twice, in a coarse-to-fine manner. Our C2F framework can achieve better motion compensation results without significantly increasing bit costs. Observing that the hyperprior information (i.e., the mean and variance values) from the hyperprior networks contains discriminative statistical information about different patches, we also propose two efficient hyperprior-guided mode prediction methods. Specifically, using the hyperprior information as input, we propose two mode prediction networks to respectively predict the optimal block resolutions for better motion coding and to decide whether to skip the residual information of each block for better residual coding, without introducing additional bits while bringing negligible extra computational cost. Comprehensive experimental results demonstrate that, equipped with the new hyperprior-guided mode prediction methods, our proposed C2F video compression framework achieves state-of-the-art performance on the HEVC, UVG, and MCL-JCV datasets.
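The two-pass motion pipeline can be pictured with the following PyTorch sketch; `coarse_net` and `fine_net` are hypothetical placeholders for the learned flow estimators, and motion compression is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp frame (B,C,H,W) with optical flow (B,2,H,W)."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame)   # (2,H,W)
    coords = grid.unsqueeze(0) + flow
    gx = 2 * coords[:, 0] / (w - 1) - 1                      # to [-1, 1]
    gy = 2 * coords[:, 1] / (h - 1) - 1
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1),
                         align_corners=True)

def coarse_to_fine(ref, cur, coarse_net, fine_net):
    """Estimate flow at half resolution, upsample it (scaling the
    motion magnitudes by 2), then refine at full resolution."""
    flow_lo = coarse_net(F.avg_pool2d(ref, 2), F.avg_pool2d(cur, 2))
    flow_up = 2 * F.interpolate(flow_lo, scale_factor=2, mode="bilinear",
                                align_corners=False)
    residual = fine_net(ref, cur, warp(ref, flow_up))        # fine pass
    return warp(ref, flow_up + residual)
```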
Object visual navigation aims to steer an agent toward a target object based on the agent's visual observations. It is highly desirable to perceive the environment reasonably and to control the agent accurately. For the navigation task, we introduce an Agent-Centric Relation Graph (ACRG) for learning visual representations based on the relationships in the environment. ACRG is an efficient and reasonable structure consisting of two relationships, i.e., the relationship among objects and the relationship between the agent and the target. On the one hand, we design the Object Horizontal Relationship Graph (OHRG), which stores the relative horizontal locations among objects. Note that vertical relationships are not involved in OHRG, and we argue that OHRG is suitable for the control strategy. On the other hand, we propose the Agent-Target Depth Relationship Graph (ATDRG), which enables the agent to perceive its distance to the target. To achieve ATDRG, we utilize image depth to represent the distance. Given the above relationships, the agent can perceive the environment and output navigation actions. Given the visual representation constructed by ACRG and position-encoded global features, the agent can capture the target position to perform navigation actions. Experimental results in the artificial environment AI2-THOR demonstrate that ACRG significantly outperforms other state-of-the-art methods in unseen testing environments.
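The two relation graphs reduce to simple geometric quantities; the toy construction below uses hypothetical names and assumes object detections with 2D centers plus per-object depth estimates.

```python
import torch

def build_acrg(centers, depths, target_idx):
    """Toy version of the two relationships: OHRG stores pairwise
    horizontal offsets between objects (no vertical term), and ATDRG
    stores each object's depth difference to the target."""
    x = centers[:, 0]                          # horizontal coordinates
    ohrg = x.unsqueeze(0) - x.unsqueeze(1)     # (N, N) relative offsets
    atdrg = depths - depths[target_idx]        # (N,) distance-to-target
    return ohrg, atdrg

centers = torch.tensor([[0.2, 0.5], [0.7, 0.4], [0.9, 0.8]])
depths = torch.tensor([1.5, 3.0, 2.2])
ohrg, atdrg = build_acrg(centers, depths, target_idx=1)
```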
Label noise is ubiquitous in various machine learning scenarios such as self-labeling with model predictions and erroneous data annotation. Many existing approaches are based on heuristics such as sample losses, which might not be flexible enough to achieve optimal solutions. Meta learning based methods address this issue by learning a data selection function, but can be hard to optimize. In light of these pros and cons, we propose Selection-Enhanced Noisy label Training (SENT) that does not rely on meta learning while having the flexibility of being data-driven. SENT transfers the noise distribution to a clean set and trains a model to distinguish noisy labels from clean ones using model-based features. Empirically, on a wide range of tasks including text classification and speech recognition, SENT improves performance over strong baselines under the settings of self-training and label corruption.
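One way to picture the transfer step is the sketch below: label noise is injected into a clean set at an assumed rate, and a simple detector learns to separate noisy from clean examples using model-based features. The feature pipeline and the `LogisticRegression` choice are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_selection_model(clean_features, noise_rate=0.3, seed=0):
    """Flip a fraction of a clean set's labels to mimic the noise
    distribution, then train a detector mapping model-based features
    (e.g., per-example loss, confidence) to noisy-vs-clean. The
    features are assumed to be recomputed after flipping."""
    rng = np.random.default_rng(seed)
    flipped = rng.random(len(clean_features)) < noise_rate
    return LogisticRegression().fit(clean_features, flipped)

# Usage sketch: keep only examples the detector predicts to be clean.
# keep_mask = ~detector.predict(noisy_features)
```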
In this paper, we propose a large-scale language pre-training for text GENeration using dIffusion modEl, named GENIE. GENIE is a pre-trained sequence-to-sequence text generation model that combines Transformer and diffusion. The diffusion model accepts latent information from the encoder, which is used to guide the denoising at the current time step. After multiple such denoising iterations, the diffusion model restores Gaussian noise to diverse output text controlled by the input text. Moreover, this architecture design also allows us to adopt large-scale pre-training for GENIE. We propose a novel pre-training method named continuous paragraph denoise, based on the characteristics of the diffusion model. Extensive experiments on the XSum, CNN/DailyMail, and Gigaword benchmarks show that GENIE achieves performance comparable to various strong baselines; in particular, after pre-training, the generation quality of GENIE is greatly improved. We have also conducted extensive experiments on the generation diversity and parameter impact of GENIE. The code for GENIE will be made publicly available.
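Schematically, encoder-conditioned diffusion decoding looks like the loop below; the update rule is a placeholder (a real sampler follows the DDPM/DDIM equations), and all signatures are assumptions.

```python
import torch

@torch.no_grad()
def generate(encoder, denoiser, src_ids, steps=1000, shape=(1, 64, 768)):
    """Start from Gaussian noise and iteratively denoise; at every
    step the denoiser is guided by the latent from the input text."""
    cond = encoder(src_ids)            # latent guidance from the encoder
    x = torch.randn(shape)             # pure Gaussian starting point
    for t in reversed(range(steps)):
        eps = denoiser(x, t, cond)     # predicted noise at step t
        x = x - eps / steps            # schematic update, not DDPM math
    return x                           # continuous states -> output tokens
```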
Conventional cameras capture image irradiance on a sensor and convert it to RGB images using an image signal processor (ISP). The images can then be used for photography or visual computing tasks in a variety of applications, such as public safety surveillance and autonomous driving. One can argue that since RAW images contain all the captured information, the conversion of RAW to RGB using an ISP is not necessary for visual computing. In this paper, we propose a novel $\rho$-Vision framework to perform high-level semantic understanding and low-level compression using RAW images, without the ISP subsystem used for decades. Considering the scarcity of available RAW image datasets, we first develop an unpaired CycleR2R network based on unsupervised CycleGAN to train modular unrolled ISP and inverse ISP (invISP) models using unpaired RAW and RGB images. We can then flexibly generate simulated RAW images (simRAW) using any existing RGB image dataset and finetune different models originally trained for the RGB domain to process real-world camera RAW images. We demonstrate object detection and image compression capabilities in the RAW domain using a RAW-domain YOLOv3 and a RAW image compressor (RIC) on snapshots from various cameras. Quantitative results reveal that RAW-domain task inference provides better detection accuracy and compression than RGB-domain processing. Furthermore, the proposed $\rho$-Vision generalizes across various camera sensors and different task-specific models. An additional advantage of the proposed $\rho$-Vision, which eliminates the ISP, is the potential reduction in computation and processing time.
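The simRAW workflow amounts to two steps, sketched below under assumed interfaces (`inv_isp` stands in for the unpaired-trained invISP model; the loop is a generic finetuning skeleton, not the paper's training code).

```python
import torch

@torch.no_grad()
def rgb_to_simraw(rgb_batch, inv_isp):
    """Push existing RGB images through the learned inverse ISP to
    approximate sensor RAW (the simRAW generation step)."""
    return inv_isp(rgb_batch)

def finetune_on_simraw(model, rgb_loader, inv_isp, optimizer, loss_fn):
    """Adapt an RGB-pretrained task model (e.g., a detector) to the
    RAW domain by training it on simRAW inputs."""
    for rgb, targets in rgb_loader:
        loss = loss_fn(model(rgb_to_simraw(rgb, inv_isp)), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```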